
    Fast quantum subroutines for the simplex method

    We propose quantum subroutines for the simplex method that avoid classical computation of the basis inverse. For an $m \times n$ constraint matrix with at most $d_c$ nonzero elements per column, at most $d$ nonzero elements per column or row of the basis, basis condition number $\kappa$, and optimality tolerance $\epsilon$, we show that pricing can be performed in $\tilde{O}(\frac{1}{\epsilon}\kappa d \sqrt{n}(d_c n + d m))$ time, where the $\tilde{O}$ notation hides polylogarithmic factors. If the ratio $n/m$ is larger than a certain threshold, the running time of the quantum subroutine can be reduced to $\tilde{O}(\frac{1}{\epsilon}\kappa d^{1.5} \sqrt{d_c}\, n \sqrt{m})$. The steepest edge pivoting rule also admits a quantum implementation, increasing the running time by a factor $\kappa^2$. Classically, pricing requires $O(d_c^{0.7} m^{1.9} + m^{2+o(1)} + d_c n)$ time in the worst case using the fastest known algorithm for sparse matrix multiplication, and $O(d_c^{0.7} m^{1.9} + m^{2+o(1)} + m^2 n)$ with steepest edge. Furthermore, we show that the ratio test can be performed in $\tilde{O}(\frac{t}{\delta} \kappa d^2 m^{1.5})$ time, where $t, \delta$ determine a feasibility tolerance; classically, this requires $O(m^2)$ time in the worst case. For well-conditioned sparse problems the quantum subroutines scale better in $m$ and $n$, and may therefore have a worst-case asymptotic advantage. An important feature of our paper is that this asymptotic speedup does not depend on the data being available in some "quantum form": the input of our quantum subroutines is the natural classical description of the problem, and the output is the index of the variables that should leave or enter the basis.
    Comment: Added discussion on condition number and infeasibilities
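    To make concrete what the quantum pricing subroutine accelerates, the following is a minimal sketch of classical Dantzig pricing in the revised simplex method: solve $B^T y = c_B$ for the dual values, form the reduced costs $d_j = c_j - y^T A_j$, and return the most negative one. All data in the example is illustrative and uses dense linear algebra, whereas the paper targets large sparse bases.

```python
import numpy as np

def pricing(B, N, c_B, c_N, eps=1e-9):
    """Dantzig pricing: index of the nonbasic column with the most
    negative reduced cost, or None if the basis is eps-optimal."""
    y = np.linalg.solve(B.T, c_B)   # dual values (the "btran" step)
    reduced = c_N - y @ N           # d_j = c_j - y . A_j for each column
    j = int(np.argmin(reduced))
    return j if reduced[j] < -eps else None
```

For example, with an identity basis and nonbasic reduced costs $(-3, -5)$, the subroutine selects the second column; when all reduced costs are nonnegative it reports optimality by returning None.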

    On the implementation of a global optimization method for mixed-variable problems

    We describe the optimization algorithm implemented in the open-source derivative-free solver RBFOpt. The algorithm is based on the radial basis function method of Gutmann and the metric stochastic response surface method of Regis and Shoemaker. We propose several modifications aimed at generalizing and improving these two algorithms: (i) the use of an extended space to represent categorical variables in unary encoding; (ii) a refinement phase to locally improve a candidate solution; (iii) interpolation models without the unisolvence condition, both to help deal with categorical variables and to initiate the optimization before a uniquely determined model is possible; (iv) a master-worker framework to allow asynchronous objective function evaluations in parallel. Numerical experiments show the effectiveness of these ideas.
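    The surrogate model underlying this family of methods can be sketched in a few lines: interpolate the sampled objective values with cubic radial basis functions plus a linear polynomial tail, then evaluate the resulting model at candidate points. This is a generic Gutmann-style RBF interpolant, not RBFOpt's actual implementation; function names and the quadratic test objective are illustrative.

```python
import numpy as np

def fit_cubic_rbf(X, f):
    """Fit s(x) = sum_i lam_i ||x - x_i||^3 + p(x), with p a linear
    polynomial tail, by solving the standard RBF interpolation system."""
    n, d = X.shape
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    Phi = r ** 3                                  # cubic RBF matrix
    P = np.hstack([X, np.ones((n, 1))])           # linear tail [x, 1]
    A = np.block([[Phi, P], [P.T, np.zeros((d + 1, d + 1))]])
    rhs = np.concatenate([f, np.zeros(d + 1)])
    coef = np.linalg.solve(A, rhs)
    return coef[:n], coef[n:]

def eval_rbf(X, lam, poly, x):
    """Evaluate the fitted surrogate at a single point x."""
    r = np.linalg.norm(X - x, axis=1)
    return lam @ r ** 3 + poly[:-1] @ x + poly[-1]
```

By construction the model reproduces the sampled values exactly, which is the property modification (iii) above relaxes when the sample is not unisolvent.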

    A local branching heuristic for MINLPs

    Local branching is an improvement heuristic, developed in the context of branch-and-bound algorithms for MILPs, that has proved very effective in practice. For the binary case, it is based on defining a neighbourhood of the current incumbent solution by allowing only a few binary variables to flip their value, through the addition of a local branching constraint. The neighbourhood is then explored with a branch-and-bound solver. We propose a local branching scheme for (nonconvex) MINLPs which is based on iteratively solving MILPs and NLPs. Preliminary computational experiments show that this approach is able to improve the incumbent solution on the majority of the test instances, requiring only a short CPU time. Moreover, we provide algorithmic ideas for a primal heuristic whose purpose is to find a first feasible solution, based on the same scheme.
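    The local branching constraint mentioned above has a standard linear form: given an incumbent $x^*$, the neighbourhood of Hamming radius $k$ is $\sum_{j: x^*_j = 1}(1 - x_j) + \sum_{j: x^*_j = 0} x_j \le k$. A minimal sketch of how it is built, returning coefficients $(a, b)$ with $a^T x \le b$ so it can be handed to any MILP solver (the solver interface itself is omitted):

```python
def local_branching_cut(x_star, k):
    """Linearised Hamming-ball constraint around incumbent x_star:
    a . x <= b holds iff x is within Hamming distance k of x_star."""
    a = [-1.0 if v >= 0.5 else 1.0 for v in x_star]
    ones = sum(1 for v in x_star if v >= 0.5)
    b = k - ones   # constant part of the sum over x*_j = 1 moved to the rhs
    return a, b

def hamming(x, y):
    """Number of binary coordinates where x and y differ."""
    return sum(1 for u, v in zip(x, y) if round(u) != round(v))
```

Points inside the Hamming ball satisfy the cut and points outside violate it, which is exactly how the neighbourhood is carved out of the feasible region before the branch-and-bound exploration.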

    Rounding-based heuristics for nonconvex MINLPs

    We propose two primal heuristics for nonconvex mixed-integer nonlinear programs. Both are based on the idea of rounding the solution of a continuous nonlinear program subject to linear constraints. Each rounding step is accomplished through the solution of a mixed-integer linear program. Our heuristics use the same algorithmic scheme, but they differ in the choice of the point to be rounded (which is feasible for nonlinear constraints but possibly fractional) and in the linear constraints. We propose a feasibility heuristic, which aims at finding an initial feasible solution, and an improvement heuristic, whose purpose is to search for an improved solution within the neighborhood of a given point. The neighborhood is defined through local branching cuts or box constraints. Computational results show the effectiveness in practice of these simple ideas, implemented within an open-source solver for nonconvex mixed-integer nonlinear programs.

    Core Routing on Dynamic Time-Dependent Road Networks

    Route planning in large-scale time-dependent road networks is an important practical application of the shortest paths problem that greatly benefits from speedup techniques. In this paper we extend a two-level hierarchical approach for point-to-point shortest paths computations to the time-dependent case. This method, also known as core routing in the literature for static graphs, consists in the selection of a small subnetwork where most of the computations can be carried out, thus reducing the search space. We combine this approach with bidirectional goal-directed search in order to obtain an algorithm capable of finding shortest paths in a matter of milliseconds on continental-sized networks. Moreover, we tackle the dynamic scenario where the piecewise linear functions that we use to model time-dependent arc costs are not fixed, but can have their coefficients updated, requiring only a small computational effort.
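    The baseline that such speedup techniques accelerate can be sketched compactly: a time-dependent Dijkstra where each arc cost is a piecewise linear function of the departure time, stored as a breakpoint list and evaluated by interpolation (constant outside the breakpoints). The hierarchical core, bidirectional search, and goal direction from the paper are omitted, FIFO arc costs are assumed, and the graph data in the test is illustrative.

```python
import heapq, bisect

def eval_pl(bp, t):
    """Evaluate a piecewise linear function given as [(t0, c0), (t1, c1), ...]."""
    ts = [p[0] for p in bp]
    i = bisect.bisect_right(ts, t) - 1
    if i < 0:
        return bp[0][1]          # before the first breakpoint: constant
    if i >= len(bp) - 1:
        return bp[-1][1]         # after the last breakpoint: constant
    (t0, c0), (t1, c1) = bp[i], bp[i + 1]
    return c0 + (c1 - c0) * (t - t0) / (t1 - t0)

def td_dijkstra(graph, src, dst, t_dep):
    """graph: {u: [(v, breakpoints), ...]}; earliest arrival time at dst
    when departing src at t_dep, or inf if dst is unreachable."""
    dist = {src: t_dep}
    pq = [(t_dep, src)]
    while pq:
        t, u = heapq.heappop(pq)
        if u == dst:
            return t
        if t > dist.get(u, float("inf")):
            continue             # stale queue entry
        for v, bp in graph.get(u, []):
            arr = t + eval_pl(bp, t)   # travel time depends on departure time
            if arr < dist.get(v, float("inf")):
                dist[v] = arr
                heapq.heappush(pq, (arr, v))
    return float("inf")
```

The dynamic scenario in the paper corresponds to updating the breakpoint lists in place; the query algorithm itself is unchanged, which is why cheap coefficient updates are attractive.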